
Conditional Test Logic

The book has now been published and the content of this chapter has likely changed substantially.
Please see page 200 of xUnit Test Patterns for the latest information.
Also known as: Indented Test Code


A test contains code that may or may not be executed

A Fully Automated Test (see Goals of Test Automation on page X) is just code that verifies the behavior of other code. If this code is complicated, how do we verify that it works properly? We could write tests for our tests, but when would this recursion stop? The simple answer is that Test Methods (page X) must be simple enough not to need tests.

Conditional Test Logic is one factor that makes tests more complicated than they need to be.

Symptoms

As a code smell, Conditional Test Logic need not have any behavioral symptoms, but it should be reasonably obvious to the test reader. Any control structures within a Test Method should be viewed with extreme suspicion! The test reader may also find themselves wondering which code path is the one actually being executed. The following example of Conditional Test Logic involves both looping and if statements:

      //   verify Vancouver is in the list:
      FlightDto actual = null;
      Iterator i = flightsFromCalgary.iterator();
      while (i.hasNext()) {
         FlightDto flightDto = (FlightDto) i.next();
         if (flightDto.getFlightNumber().equals(
               expectedCalgaryToVan.getFlightNumber())) {
            actual = flightDto;
            assertEquals("Flight from Calgary to Vancouver",
                         expectedCalgaryToVan, flightDto);
            break;
         }
      }
   }
Example LoopingVerificationLogic embedded from java/com/clrstream/ex5/test/FlightManagementFacadeTest.java

This code raises the question "What is this test code doing, and how do we know that it is doing it correctly?" A somewhat more behavioral symptom is the related project-level smell High Test Maintenance Cost (page X), which may be caused by the complexity that Conditional Test Logic introduces.

Impact

The problem with Conditional Test Logic is that it makes it hard to know exactly what a test is going to do when it really matters. Code that has only a single execution path always executes exactly the same way. Code that has multiple execution paths is much harder to be confident in.

To increase our confidence in production code, we write Self-Checking Tests (see Goals of Test Automation) that exercise that code. How can we increase our confidence in the test code if it executes differently each time we run it? It is hard to know (or prove) that the test is verifying the behavior we want it to verify. A test that has branches or loops, or that uses different values each time it is run, can be very difficult to debug simply because it isn't completely deterministic.

A related issue is that Conditional Test Logic makes it harder to write the test correctly in the first place. Because tests cannot easily be tested, how do we know that a test will actually detect the bugs it is intended to catch? (This is a general problem with Obscure Tests (page X); they are more likely to result in Buggy Tests (page X) than simple code is.)

Causes

Test automaters may introduce Conditional Test Logic for several reasons: to adapt the test to the environment it is running in (Flexible Test), to steer the verification of results (Conditional Verification Logic), to calculate expected values inside the test (Production Logic in Test), to clean up persistent resources safely (Complex Teardown), or to apply the same test logic to many sets of values (Multiple Test Conditions).

Some of these reasons are worth looking at in more detail.

Cause: Flexible Test

Test code verifies different functionality depending on when or where it is run.

Symptoms

The test has conditional logic in it that does different things depending on the current environment. Most commonly this shows up as Conditional Test Logic to build different versions of the expected results based on some factor external to the test.

Consider the following test that gets the current time so that it can determine what the output of the SUT should be.

   public void testDisplayCurrentTime_whenever() {
      // fixture setup
      TimeDisplay sut = new TimeDisplay();
      // exercise sut
      String result = sut.getCurrentTimeAsHtmlFragment();
      // verify outcome
      Calendar time = new DefaultTimeProvider().getTime();  
      StringBuffer expectedTime = new StringBuffer();
      expectedTime.append("<span class=\"tinyBoldText\">");
      if ((time.get(Calendar.HOUR_OF_DAY) == 0)
         && (time.get(Calendar.MINUTE) <= 1)) {
         expectedTime.append( "Midnight");
      } else if ((time.get(Calendar.HOUR_OF_DAY) == 12)
                  && (time.get(Calendar.MINUTE) == 0)) { // noon
         expectedTime.append("Noon");
      } else  {
         SimpleDateFormat fr = new SimpleDateFormat("h:mm a");
         expectedTime.append(fr.format(time.getTime()));
      }
      expectedTime.append("</span>");
      assertEquals(expectedTime.toString(), result); // compare as String; a StringBuffer is never equal to a String
   }
Example FlexibleTest embedded from java/com/clrstream/ex7/test/TimeDisplayTest.java

Root Cause

Flexible Test is caused by a lack of control of the environment. The test automater probably wasn't able to decouple the SUT from its dependencies and decided to adapt the test logic based on the state of the environment.

Impact

The first issue with Flexible Test is that it makes the test harder to understand and therefore maintain. The second issue is that we don't know which test scenarios are actually being exercised and whether all the scenarios are in fact being exercised regularly. For example, in our sample test above, does the midnight scenario ever get exercised? How often? Probably rarely, if ever, because the test would have to be run at exactly midnight, which is unlikely even if we timed the nightly build so that it ran over midnight.

Possible Solution

Flexible Test is best addressed by decoupling the SUT from whatever dependencies led the test automater to make the test flexible. This involves refactoring the SUT to support a substitutable dependency, which allows us to replace the dependency with a Test Double (page X) such as a Test Stub (page X) or Mock Object (page X) and to write a separate test for each circumstance previously covered by the Flexible Test.
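For example, once the time source is substitutable, each scenario gets its own small, deterministic test. This is a minimal sketch assuming TimeDisplay exposes a setTimeProvider method and that we have a TimeProviderTestStub (hypothetical names) that returns whatever time we configure:

   public void testDisplayCurrentTime_AtMidnight() throws Exception {
      // fixture setup: hard-code the time via a Test Stub
      TimeProviderTestStub tpStub = new TimeProviderTestStub();
      tpStub.setHours(0);
      tpStub.setMinutes(0);
      TimeDisplay sut = new TimeDisplay();
      sut.setTimeProvider(tpStub);
      // exercise sut
      String result = sut.getCurrentTimeAsHtmlFragment();
      // verify outcome: one fixed, fully deterministic expected value
      String expectedTimeString = "<span class=\"tinyBoldText\">Midnight</span>";
      assertEquals(expectedTimeString, result);
   }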

Cause: Conditional Verification Logic

Another cause is the use of Conditional Test Logic (page X) to verify the expected outcome. This is usually caused by a desire to prevent the execution of assertions when the SUT fails to return the right objects, or by the use of loops to verify the contents of collections returned by the SUT.

      //   verify Vancouver is in the list:
      FlightDto actual = null;
      Iterator i = flightsFromCalgary.iterator();
      while (i.hasNext()) {
         FlightDto flightDto = (FlightDto) i.next();
         if (flightDto.getFlightNumber().equals(
               expectedCalgaryToVan.getFlightNumber())) {
            actual = flightDto;
            assertEquals("Flight from Calgary to Vancouver",
                         expectedCalgaryToVan, flightDto);
            break;
         }
      }
   }
Example LoopingVerificationLogic embedded from java/com/clrstream/ex5/test/FlightManagementFacadeTest.java

Possible Solution

We can replace if statements that steer execution to a call to fail with a Guard Assertion (page X) that fails the test before we reach the code we don't want to execute. If the test is an Expected Exception Test (see Test Method), we should instead use the standard coding idiom for our xUnit family member and language.
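For example, instead of wrapping the assertions in an if statement whose else branch calls fail, a Guard Assertion fails fast and keeps the verification logic linear. This is a minimal sketch based on the flight example above:

      // Guard Assertions: fail the test right here if the SUT returned
      // nothing, so the assertions that follow never run against a null
      // or empty list
      assertNotNull("flights from Calgary", flightsFromCalgary);
      assertFalse("empty flight list", flightsFromCalgary.isEmpty());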

We can replace Conditional Test Logic for verification of complex objects with an Equality Assertion (see Assertion Method on page X) on an Expected Object. If the production code's equals method is too strict, we can use a Custom Assertion (page X) to define test-specific equality.
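For example, rather than checking fields one at a time inside if statements, we can construct an Expected Object and compare it with a single Equality Assertion. This sketch assumes FlightDto has simple setters (hypothetical names) and an equals method that compares the relevant fields:

      // build the Expected Object holding the values the SUT should return
      FlightDto expectedFlight = new FlightDto();
      expectedFlight.setFlightNumber("1234");    // illustrative values
      expectedFlight.setOrigin("Calgary");
      expectedFlight.setDestination("Vancouver");
      // one Equality Assertion instead of a tree of if statements
      assertEquals("Calgary to Vancouver flight", expectedFlight, actualFlight);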

We should move loops in verification logic to a Custom Assertion whose behavior can be verified using Custom Assertion Tests (see Custom Assertion).
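Applied to the looping example above, the loop moves into a Custom Assertion such as the following sketch (assertFlightIncluded is an illustrative name). The loop still exists, but now it lives in one reusable place that we can verify independently:

   void assertFlightIncluded(String message, List flights, FlightDto expectedFlight) {
      Iterator i = flights.iterator();
      while (i.hasNext()) {
         FlightDto flightDto = (FlightDto) i.next();
         if (flightDto.getFlightNumber().equals(expectedFlight.getFlightNumber())) {
            // found the flight; verify the rest of its fields
            assertEquals(message, expectedFlight, flightDto);
            return;
         }
      }
      fail(message + ": flight " + expectedFlight.getFlightNumber() + " not found");
   }

The Test Method then shrinks to a single, linear call:

      assertFlightIncluded("Flight from Calgary to Vancouver", flightsFromCalgary, expectedCalgaryToVan);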

We can reuse test logic in several tests by calling a Test Utility Method (page X) or a common Parameterized Test (page X), passing in the already-built test fixture and/or Expected Objects.

Cause: Production Logic in Test

Symptoms

Some forms of Conditional Test Logic are found in the result verification section of our tests. Let us look more closely inside the loops of this test:

   public void testCombinationsOfInputValues() {
      // Setup Fixture:
      Calculator sut = new Calculator();
      int expected;  // TBD inside loops
      for (int i = 0; i < 10; i++) {
         for (int j = 0; j < 10; j++) {
            // Exercise SUT:
            int actual = sut.calculate( i, j );
            // Verify result:
            if (i==3 && j==4)  // special case
               expected = 8;
            else
               expected = i+j;
            assertEquals(message(i,j), expected, actual);
         }
      }
   }

   private String message(int i, int j) {
      return "Cell( " + String.valueOf(i) + "," + String.valueOf(j) + ")";
   }
Example ProductionLogicInTest embedded from java/com/xunitpatterns/misc/LoopingTest.java

The nested loops in this Loop-Driven Test (see Parameterized Test) exercise the SUT with various combinations of the values of i and j as inputs. The Conditional Test Logic inside the loop is what we want to focus on here.

Root Cause

This Production Logic in Test is a direct result of wanting to verify multiple test conditions in a single Test Method. Since multiple input values are passed to the SUT, we must also have multiple expected results. It is hard to enumerate the expected result for each set of inputs when we pass many combinations of several input arguments to the SUT in nested loops. A common solution is to use a Calculated Value (see Derived Value on page X) based on the inputs. The potential downfall (as we see here) comes when we find ourselves replicating, inside the test, the very logic we expect the SUT to contain in order to calculate the expected value.

Possible Solution

If at all possible, it is better to enumerate the sets of pre-calculated values with which to test the SUT. Here is an example that tests the same logic using a (smaller) set of enumerated values instead:

   public void testMultipleValueSets() {
      // Setup Fixture:
      Calculator sut = new Calculator();
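      // TestValues is a simple holder for two inputs and the expected sum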
      TestValues[] testValues = { new TestValues(1,2,3),
                     new TestValues(2,3,5),
                     new TestValues(3,4,8), // special case!
                     new TestValues(4,5,9) };
      for (int i = 0; i < testValues.length; i++) {
         TestValues values = testValues[i];
         // Exercise SUT:
         int actual = sut.calculate( values.a, values.b);
         // Verify result:
         assertEquals(message(i), values.expectedSum, actual);
      }
   }
  
   private String message(int i) {
      return "Row "+ String.valueOf(i);
   }
Example LoopingTest embedded from java/com/xunitpatterns/misc/LoopingTest.java

Cause: Complex Teardown

Symptoms

Complex fixture teardown code is more likely to leave the test environment corrupted by not cleaning up correctly. It is hard to verify that such code has been written correctly, and it can easily result in "data leaks" that may later cause this or other tests to fail for no apparent reason. Consider this example:

   public void testGetFlightsByOrigin_NoInboundFlight_SMRTD() throws Exception {
      // Fixture setup
      BigDecimal outboundAirport = createTestAirport("1OF");
      BigDecimal inboundAirport = null;
      FlightDto expFlightDto = null;
      try {
         inboundAirport = createTestAirport("1IF");
         expFlightDto = createTestFlight(outboundAirport, inboundAirport);
         // Exercise System
         List flightsAtDestination1 = facade.getFlightsByOriginAirport(inboundAirport);
         // Verify Outcome
         assertEquals(0,flightsAtDestination1.size());
      } finally {
         try {
            facade.removeFlight(expFlightDto.getFlightNumber());
         } finally {
            try {
               facade.removeAirport(inboundAirport);
            } finally  {
               facade.removeAirport(outboundAirport);
            } 
         }             
      }           
   }
Example SafeMultiResourceGuaranteedTeardown embedded from java/com/clrstream/ex6/services/test/InlineTeardownExampleTest.java

Root Cause

Teardown is typically only required when we use persistent resources that are beyond the reach of our garbage collection system. Complex Teardown occurs when many such resources are used in the same Test Method.

Possible Solution

Complex teardown logic is best avoided by using Implicit Teardown (page X) to make it reusable and testable, or Automated Teardown (page X), which can be verified with automated unit tests. We can also avoid the need for teardown entirely by using a Fresh Fixture (page X) strategy and/or by avoiding persistent objects in our tests; persistent objects can often be replaced by some sort of Test Double.
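Here is a minimal sketch of Implicit Teardown applied to the flight example, assuming the objects created during fixture setup are stashed in instance fields. The Test Automation Framework calls tearDown after every Test Method, whether it passed or failed, and each guarded removal ignores failures so that one missing resource cannot block the cleanup of the others:

   private FlightDto expFlightDto;
   private BigDecimal inboundAirport;
   private BigDecimal outboundAirport;

   protected void tearDown() throws Exception {
      try { if (expFlightDto != null) facade.removeFlight(expFlightDto.getFlightNumber()); } catch (Exception e) {}
      try { if (inboundAirport != null) facade.removeAirport(inboundAirport); } catch (Exception e) {}
      try { if (outboundAirport != null) facade.removeAirport(outboundAirport); } catch (Exception e) {}
   }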

Cause: Multiple Test Conditions

Symptoms

A test is trying to apply the same test logic to many sets of input values, each with its own corresponding expected result. In this example, the test is iterating over a collection of test values and applying the test logic to each set:

   public void testMultipleValueSets() {
      // Setup Fixture:
      Calculator sut = new Calculator();
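      // TestValues is a simple holder for two inputs and the expected sum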
      TestValues[] testValues = { new TestValues(1,2,3),
                     new TestValues(2,3,5),
                     new TestValues(3,4,8), // special case!
                     new TestValues(4,5,9) };
      for (int i = 0; i < testValues.length; i++) {
         TestValues values = testValues[i];
         // Exercise SUT:
         int actual = sut.calculate( values.a, values.b);
         // Verify result:
         assertEquals(message(i), values.expectedSum, actual);
      }
   }
  
   private String message(int i) {
      return "Row "+ String.valueOf(i);
   }
Example LoopingTest embedded from java/com/xunitpatterns/misc/LoopingTest.java

Root Cause

The test automater is trying to test many test conditions using the same test logic in a single Test Method. In this case the Conditional Test Logic is fairly simple; it could be a lot worse if there were multiple nested loops and maybe even if statements to calculate the expected values for the different cases.

Possible Solution

Of all the causes of Conditional Test Logic, Multiple Test Conditions is probably the most innocuous. Other than scaring the test reader, the main impact of a test such as this is that it stops executing at the first failure and therefore doesn't provide Defect Localization (see Goals of Test Automation) when a bug is introduced into the code. The readability issue can easily be addressed by using an Extract Method [Fowler] refactoring to create a Parameterized Test that is called from within the loop. The lack of Defect Localization can be addressed by calling the Parameterized Test from a separate Test Method for each test condition. For large sets of values, a Data-Driven Test (page X) might be a better solution.
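As a sketch, the loop body becomes a Parameterized Test (verifySum is an illustrative name), and one Test Method per test condition restores Defect Localization:

   public void testCalculate_ordinaryValues() { verifySum(1, 2, 3); }
   public void testCalculate_specialCase()    { verifySum(3, 4, 8); }

   // Parameterized Test: all of the test logic, with the values passed in
   private void verifySum(int a, int b, int expectedSum) {
      Calculator sut = new Calculator();
      int actual = sut.calculate(a, b);
      assertEquals("sum of " + a + " and " + b, expectedSum, actual);
   }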




Copyright © 2003-2008 Gerard Meszaros all rights reserved
